
    Stellaris: An RDF-based Information Service for AstroGrid-D

    We present Stellaris, the information service of the community project AstroGrid-D. Stellaris is the core component of the AstroGrid-D middleware that enables scientists to share resources, access large datasets and integrate instruments such as robotic telescopes. Besides the many diverse types of resources, the information service also supports a wide range of use cases, each using its own metadata schema. In addition, Stellaris addresses the distributed and dynamic nature of collaborations in the astronomy community. It satisfies these requirements by adopting RDF and SPARQL for storing and querying metadata. Our paper focuses on the requirements of the community, presents the architecture of the information service in detail, and discusses experiences with the prototype already in use by partners within the project.
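
    To make the RDF/SPARQL approach concrete, the sketch below stores and queries resource metadata with Python's rdflib. The vocabulary (ex:Telescope, ex:location, ex:apertureMetres) is a hypothetical stand-in, not one of Stellaris's actual schemas.

        # Minimal sketch: resource metadata as RDF, queried with SPARQL.
        # Requires rdflib; the ex: vocabulary is hypothetical.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/astrogrid#")

        g = Graph()
        g.bind("ex", EX)

        # Register a fictitious robotic telescope as a grid resource.
        scope = EX["telescope-42"]
        g.add((scope, RDF.type, EX.Telescope))
        g.add((scope, EX.location, Literal("Potsdam")))
        g.add((scope, EX.apertureMetres, Literal(0.7)))

        # SPARQL query: find all telescopes and their locations.
        results = g.query("""
            PREFIX ex: <http://example.org/astrogrid#>
            SELECT ?t ?loc WHERE { ?t a ex:Telescope ; ex:location ?loc . }
        """)
        for telescope, location in results:
            print(telescope, location)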

    Parsing XML Using Parallel Traversal of Streaming Trees

    XML has been widely adopted across a wide spectrum of applications. Its parsing efficiency, however, remains a concern and can be a bottleneck. With the current trend towards multicore CPUs, parallelization to improve performance is increasingly relevant. In many applications, the XML is streamed from the network, and thus the complete XML document is never in memory at any single moment in time. Parallel parsing of such a stream can be equated to parallel depth-first traversal of a streaming tree. Existing research on parallel tree traversal has assumed the entire tree was available in memory, and thus cannot be directly applied. In this paper we investigate parallel, SAX-style parsing of XML via a parallel, depth-first traversal of the streaming document. We show good scalability up to about 6 cores on a Linux platform.
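
    For reference, the sketch below shows the sequential baseline the paper starts from: SAX-style parsing of a document fed in chunks, so that it is never fully in memory. The parallel traversal that is the paper's actual contribution is not reproduced here.

        # Sequential SAX-style streaming parse: the document arrives in
        # chunks, as from a network, and is traversed depth-first.
        import xml.sax

        class TagCounter(xml.sax.ContentHandler):
            """Counts elements as the stream is traversed."""
            def __init__(self):
                super().__init__()
                self.count = 0

            def startElement(self, name, attrs):
                self.count += 1

        handler = TagCounter()
        parser = xml.sax.make_parser()
        parser.setContentHandler(handler)

        # Feed the document incrementally, chunk by chunk.
        for chunk in ["<root><a><b/>", "</a><a/></root>"]:
            parser.feed(chunk)
        parser.close()
        print(handler.count)  # 4 elements: root, a, b, a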

    Challenges in QCD matter physics - The Compressed Baryonic Matter experiment at FAIR

    Substantial experimental and theoretical efforts worldwide are devoted to exploring the phase diagram of strongly interacting matter. At LHC and top RHIC energies, QCD matter is studied at very high temperatures and nearly vanishing net-baryon densities. There is evidence that a Quark-Gluon Plasma (QGP) was created in experiments at RHIC and LHC. The transition from the QGP back to the hadron gas is found to be a smooth crossover. For larger net-baryon densities and lower temperatures, the QCD phase diagram is expected to exhibit a rich structure, such as a first-order phase transition between hadronic and partonic matter which terminates in a critical point, or exotic phases like quarkyonic matter. The discovery of these landmarks would be a breakthrough in our understanding of the strong interaction and is therefore the focus of various high-energy heavy-ion research programs. The Compressed Baryonic Matter (CBM) experiment at FAIR will play a unique role in the exploration of the QCD phase diagram in the region of high net-baryon densities, because it is designed to run at unprecedented interaction rates. High-rate operation is the key prerequisite for high-precision measurements of multi-differential observables and of rare diagnostic probes which are sensitive to the dense phase of the nuclear fireball. The goal of the CBM experiment at SIS100 (sqrt(s_NN) = 2.7 - 4.9 GeV) is to discover fundamental properties of QCD matter: the phase structure at large baryon-chemical potentials (mu_B > 500 MeV), effects of chiral symmetry, and the equation of state at high density as it is expected to occur in the core of neutron stars. In this article, we review the motivation for and the physics programme of CBM, including activities before the start of data taking in 2022, in the context of the worldwide efforts to explore high-density QCD matter. (15 pages, 11 figures; published in the European Physical Journal.)
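
    As a quick cross-check of the quoted energy range: for a fixed-target collision, sqrt(s_NN) = sqrt(2*m_N^2 + 2*m_N*E_lab), where E_lab = E_kin + m_N is the total beam energy per nucleon. Assuming SIS100 gold-beam kinetic energies of roughly 2-11 AGeV (an assumption for illustration), this reproduces the 2.7-4.9 GeV range:

        # Fixed-target kinematics: sqrt(s_NN) from beam kinetic energy.
        # The 2-11 AGeV beam energies are assumed for illustration.
        from math import sqrt

        m_N = 0.938  # nucleon mass in GeV

        for e_kin in (2.0, 11.0):      # kinetic energy per nucleon, GeV
            e_lab = e_kin + m_N        # total beam energy per nucleon
            s_nn = 2 * m_N**2 + 2 * m_N * e_lab
            print(f"E_kin = {e_kin:4.1f} AGeV -> sqrt(s_NN) = {sqrt(s_nn):.1f} GeV")
        # E_kin =  2.0 AGeV -> sqrt(s_NN) = 2.7 GeV
        # E_kin = 11.0 AGeV -> sqrt(s_NN) = 4.9 GeV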

    GuiGen: A Toolset for Creating Customized Interfaces for Grid User Communities

    GuiGen is a comprehensive set of tools for creating customized graphical user interfaces (GUIs). It draws on the concept of computing portals, which are here seen as interfaces to application-specific computing services for user communities. While GuiGen was originally designed for use in computational grids, it can be used in client/server environments as well. Compared to other GUI generators, GuiGen is more versatile and more portable: it can be employed in many different application domains and on different target platforms. With GuiGen, application experts (rather than computer scientists) are able to create their own individually tailored GUIs.

    Key words: Grid computing; customized user interfaces; grid user communities; web portals; XML.
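
    To illustrate the generator idea (the keywords point to XML-driven interface descriptions), here is a minimal sketch that builds a form-style GUI from an XML spec. The <gui>/<field> vocabulary and the Tkinter backend are invented for illustration and do not reflect GuiGen's actual format or toolkit.

        # Hypothetical sketch: generate a simple form GUI from an XML
        # description. The XML vocabulary is invented, not GuiGen's.
        import tkinter as tk
        import xml.etree.ElementTree as ET

        SPEC = """
        <gui title="Job Submission">
            <field label="Executable"/>
            <field label="Number of CPUs"/>
        </gui>
        """

        def build_gui(spec):
            root_el = ET.fromstring(spec)
            win = tk.Tk()
            win.title(root_el.get("title", "Generated GUI"))
            for row, f in enumerate(root_el.iter("field")):
                tk.Label(win, text=f.get("label")).grid(row=row, column=0)
                tk.Entry(win).grid(row=row, column=1)
            return win

        if __name__ == "__main__":
            build_gui(SPEC).mainloop()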

    Resource reservations with fuzzy requests

    We present a scheme for reserving job resources with imprecise requests. Typical parameters such as the estimated runtime, the start time, or the type or number of required CPUs need not be fixed at submission time but can be kept fuzzy in some aspects. Users may specify a list of preferences which guide the system in determining the best matching resources for the given job. Originally, the impetus for our work came from the need for efficient co-reservation mechanisms in the Grid, where rigid constraints on multiple job components often make it difficult to find a feasible solution. Our method for handling fuzzy reservation requests gives users more freedom to specify their requirements and gives the Grid Reservation Service more flexibility to find optimal solutions. In the future, we will extend our methods to process co-reservations. We evaluated our algorithms with real workload traces from a large supercomputer site. The results indicate that our scheme greatly improves the flexibility of the solution process without much affecting the overall workload of a site. From a user's perspective, only about 10% of the non-reservation jobs have a longer response time, and from a site administrator's view, the makespan of the original workload is extended by only 8% in the worst case.
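
    A minimal sketch of how such fuzzy matching might work: hard bounds filter the candidate slots, soft preferences score them, and the best-scoring slot wins. The request fields, weights, and scoring rule are invented for illustration; the paper's actual algorithm is not reproduced here.

        # Hypothetical fuzzy-matching sketch: filter slots by hard
        # bounds, score them against soft preferences, pick the best.
        from dataclasses import dataclass

        @dataclass
        class Slot:
            start: float  # hours from now
            cpus: int

        @dataclass
        class FuzzyRequest:
            earliest: float         # hard start-time window
            latest: float
            min_cpus: int           # hard lower bound
            preferred_start: float  # soft preferences
            preferred_cpus: int

        def score(req, slot):
            """Higher is better; two criteria, equally weighted."""
            window = max(req.latest - req.earliest, 1e-9)
            start_fit = 1.0 - abs(slot.start - req.preferred_start) / window
            cpu_fit = min(slot.cpus / req.preferred_cpus, 1.0)
            return 0.5 * start_fit + 0.5 * cpu_fit

        def best_slot(req, slots):
            feasible = [s for s in slots
                        if req.earliest <= s.start <= req.latest
                        and s.cpus >= req.min_cpus]
            return max(feasible, key=lambda s: score(req, s), default=None)

        req = FuzzyRequest(earliest=0, latest=12, min_cpus=16,
                           preferred_start=2, preferred_cpus=64)
        print(best_slot(req, [Slot(1, 32), Slot(3, 64), Slot(10, 128)]))
        # Slot(start=3, cpus=64)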

    Metacomputing in Practice: A Distributed Compute Server for Pharmaceutical Industry

    We describe a distributed high-performance compute server that has been implemented for running compute-intensive applications on a mixture of HPC systems interconnected by Inter- and Intranet. With a practical industrial background, our work focuses on high availability, efficient job load-balancing, security, and the easy integration of HPC computing into the daily workflow at pharmaceutical companies. The work was done in the course of the ESPRIT project Phase (A Distributed Pharmaceutical Application Server). The client software is implemented in Java. All results are displayed in a web browser and can be forwarded to the next stage of applications used in the drug design cycle. The server software handles the job load-balancing between the participating HPC nodes and is capable of managing multi-site applications. Our environment currently supports four key applications that are used in rational drug design and drug target identification. They range from the automatic functiona..
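
    The load-balancing component described above might look, in a highly simplified form, like the sketch below; the node names, load metric, and dispatch policy are hypothetical, not the system's actual design.

        # Hypothetical least-loaded dispatch across HPC nodes.
        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            capacity: int  # max concurrent jobs
            running: int = 0

            @property
            def load(self):
                return self.running / self.capacity

        def dispatch(job, nodes):
            """Send the job to the least-loaded node with spare capacity."""
            candidates = [n for n in nodes if n.running < n.capacity]
            if not candidates:
                return None  # queue the job until a node frees up
            best = min(candidates, key=lambda n: n.load)
            best.running += 1
            print(f"{job} -> {best.name}")
            return best

        nodes = [Node("hpc-a", capacity=8), Node("hpc-b", capacity=4, running=3)]
        dispatch("docking-1", nodes)  # docking-1 -> hpc-a (load 0.00 vs 0.75)
        dispatch("docking-2", nodes)  # docking-2 -> hpc-a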

    Low overhead alternatives to SSS*

    Of the many minimax algorithms, SSS* is noteworthy because it usually searches the smallest game trees. Its success can be attributed to the accumulation and use of information acquired while traversing the tree. The main disadvantages of SSS* are its high storage needs and management costs. This paper describes a class of methods, based on the popular alpha-beta algorithm, that acquire and use information to guide a tree search. They retain a given search direction and yet are as good as SSS*, even while searching random trees. Further, although some of these new algorithms also require substantial storage, they are more flexible and can be programmed to use onl…
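
    As a rough illustration of "alpha-beta that acquires and uses information", the sketch below adds a transposition table to negamax alpha-beta so that search results are stored and reused. This shows the general technique only, not the paper's specific algorithms.

        # Negamax alpha-beta with a transposition table: values learned
        # during the search are stored (with a bound flag) and reused.
        EXACT, LOWER, UPPER = 0, 1, 2
        table = {}  # position -> (depth, value, flag)

        def alphabeta(pos, depth, alpha, beta, children, evaluate):
            entry = table.get(pos)
            if entry and entry[0] >= depth:
                _, value, flag = entry
                if flag == EXACT:
                    return value
                if flag == LOWER:
                    alpha = max(alpha, value)
                elif flag == UPPER:
                    beta = min(beta, value)
                if alpha >= beta:
                    return value
            succ = list(children(pos))
            if depth == 0 or not succ:
                return evaluate(pos)
            orig_alpha, value = alpha, float("-inf")
            for child in succ:
                value = max(value, -alphabeta(child, depth - 1, -beta,
                                              -alpha, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # cutoff: remaining siblings cannot matter
            flag = (LOWER if value >= beta else
                    UPPER if value <= orig_alpha else EXACT)
            table[pos] = (depth, value, flag)
            return value

        # Tiny example: leaf scores are from the side to move (negamax).
        tree = {"root": ["a", "b"], "a": [], "b": []}
        scores = {"a": 3, "b": -1}
        print(alphabeta("root", 2, float("-inf"), float("inf"),
                        lambda p: tree.get(p, []), lambda p: scores[p]))  # 1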